An efficient gradient method with approximately optimal stepsizes based on regularization models for unconstrained optimization
Authors
Abstract
It is widely accepted that the stepsize is of great significance to gradient methods. An efficient gradient method with approximately optimal stepsizes, based mainly on regularization models, is proposed for unconstrained optimization. More specifically, if the objective function is not close to a quadratic on the line segment between the current and latest iterates, a regularization model is exploited carefully to generate the approximately optimal stepsize. Otherwise, a quadratic approximation model is used. In addition, when the curvature is non-positive, a special stepsize is developed. The convergence of the proposed method is established under some weak conditions. Extensive numerical experiments indicated that the method is very promising. Due to its surprising efficiency, we believe that gradient methods with approximately optimal stepsizes can become strong candidates for large-scale unconstrained optimization.
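A minimal sketch of the core idea described in the abstract, assuming a Barzilai-Borwein-type scalar curvature estimate and a simple regularization safeguard `sigma` for non-positive curvature; these are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def grad_method_approx_opt_step(grad, x0, sigma=1e-3, tol=1e-6, max_iter=500):
    """Gradient descent with an approximately optimal stepsize.

    Hypothetical sketch: curvature along the latest step is estimated by the
    BB-type scalar B = s^T y / s^T s, and the stepsize alpha = 1/B minimizes
    the quadratic model q(a) = f - a*||g||^2 + 0.5*a^2*B*||g||^2. When B is
    non-positive, the regularization parameter sigma takes over (a crude
    stand-in for the paper's special-case stepsize).
    """
    x = x0.astype(float)
    g = grad(x)
    B = 1.0                              # initial scalar curvature estimate
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0 / max(B, sigma)      # approximately optimal stepsize
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        if s @ s > 0:
            B = (s @ y) / (s @ s)        # BB curvature along the latest step
        x, g = x_new, g_new
    return x

# usage: minimize the convex quadratic f(x) = 0.5 x^T A x
A = np.diag([1.0, 10.0])
x_star = grad_method_approx_opt_step(lambda x: A @ x, np.array([5.0, 5.0]))
```

On a quadratic the BB estimate recovers the curvature along the step exactly, so the iteration terminates quickly; the `sigma` safeguard only matters for nonconvex objectives.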
Similar resources
An Efficient Conjugate Gradient Algorithm for Unconstrained Optimization Problems
In this paper, an efficient conjugate gradient method for unconstrained optimization is introduced. Parameters of the method are obtained by solving an optimization problem, and using a variant of the modified secant condition. The new conjugate gradient parameter benefits from function information as well as gradient information in each iteration. The proposed method has global convergence und...
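For orientation, a generic nonlinear conjugate gradient iteration can be sketched as below. The classical PRP+ parameter and a backtracking Armijo line search are used purely as illustrations; the cited paper's parameter, derived from a modified secant condition, is not reproduced here:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    """Illustrative nonlinear CG with the PRP+ parameter (not the paper's)."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                                   # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # backtracking Armijo line search along d
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ parameter
        d = -g_new + beta * d
        if g_new @ d >= 0:                   # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# usage: a strictly convex quadratic
A = np.diag([1.0, 5.0])
x_star = nonlinear_cg(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                      np.array([2.0, 1.0]))
```

The restart test is what the sufficient descent conditions mentioned in these abstracts aim to make unnecessary.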
A New Hybrid Conjugate Gradient Method Based on Eigenvalue Analysis for Unconstrained Optimization Problems
In this paper, two extended three-term conjugate gradient methods based on the Liu-Storey (LS) conjugate gradient method are presented to solve unconstrained optimization problems. A remarkable property of the proposed methods is that the search direction always satisfies the sufficient descent condition independent of line search method, based on eigenvalue analysis. The globa...
Multivariate spectral gradient method for unconstrained optimization
A multivariate spectral gradient method is proposed for solving unconstrained optimization problems. Combined with some quasi-Newton properties, the multivariate spectral gradient method allows an individual adaptive stepsize along each coordinate direction, which guarantees that the method is finitely convergent for positive definite quadratics. In particular, it converges in no more than two steps for posit...
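The per-coordinate stepsize idea can be sketched as follows; the componentwise secant ratio and its safeguarding bounds are illustrative assumptions, not the cited paper's exact rules:

```python
import numpy as np

def multivariate_spectral_gradient(grad, x0, tol=1e-8, max_iter=1000,
                                   lo=1e-10, hi=1e10):
    """Hypothetical sketch of a multivariate spectral gradient method.

    Instead of one scalar BB stepsize, each coordinate i gets its own
    stepsize 1/d_i, where d_i = y_i / s_i is a componentwise secant
    (quasi-Newton-like) curvature estimate, safeguarded into [lo, hi].
    """
    x = x0.astype(float)
    g = grad(x)
    d = np.ones_like(x)                 # initial diagonal curvature estimates
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - g / d               # individual stepsize per coordinate
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        with np.errstate(divide="ignore", invalid="ignore"):
            d = np.where(np.abs(s) > 1e-16, y / s, d)
        d = np.clip(np.abs(d), lo, hi)  # keep estimates positive and bounded
        x, g = x_new, g_new
    return x

# usage: on a diagonal positive definite quadratic the componentwise secant
# recovers the diagonal exactly, so the second step is an exact Newton step
A = np.diag([2.0, 50.0])
x_star = multivariate_spectral_gradient(lambda x: A @ x,
                                        np.array([3.0, -4.0]))
```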
A new hybrid conjugate gradient algorithm for unconstrained optimization
In this paper, a new hybrid conjugate gradient algorithm is proposed for solving unconstrained optimization problems. This new method can generate sufficient descent directions unrelated to any line search. Moreover, the global convergence of the proposed method is proved under the Wolfe line search. Numerical experiments are also presented to show the efficiency of the proposed algorithm, espe...
An Adaptive Nonmonotone Trust Region Method for Unconstrained Optimization Problems Based on a Simple Subproblem
Using a simple quadratic model in the trust region subproblem, a new adaptive nonmonotone trust region method is proposed for solving unconstrained optimization problems. In our method, based on a slight modification of the proposed approach in (J. Optim. Theory Appl. 158(2):626-635, 2013), a new scalar approximation of the Hessian at the current point is provided. Our new proposed method is eq...
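A trust region iteration with a simple quadratic model, m(p) = f + gᵀp + ½·γ·‖p‖², admits a closed-form subproblem solution. The sketch below uses a BB-style scalar γ as the Hessian approximation and standard (monotone) acceptance/update rules; both are hypothetical stand-ins for the cited paper's adaptive nonmonotone scheme:

```python
import numpy as np

def simple_trust_region(f, grad, x0, delta=1.0, tol=1e-6, max_iter=500):
    """Trust region sketch with a scalar-Hessian quadratic model (hypothetical)."""
    x = x0.astype(float)
    g = grad(x)
    gamma = 1.0                                    # scalar Hessian approximation
    for _ in range(max_iter):
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # minimizer of m(p) inside the ball ||p|| <= delta lies along -g
        step = min(delta, gnorm / max(gamma, 1e-12))
        p = -(step / gnorm) * g
        pred = -(g @ p + 0.5 * gamma * (p @ p))    # predicted reduction
        ared = f(x) - f(x + p)                     # actual reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho >= 0.1:                             # accept the step
            g_new = grad(x + p)
            y = g_new - g
            if p @ p > 0 and p @ y > 0:
                gamma = (p @ y) / (p @ p)          # BB-style scalar update
            x, g = x + p, g_new
            if rho > 0.75:
                delta *= 2.0                       # expand trust region
        else:
            delta *= 0.25                          # shrink trust region
    return x

# usage: convex quadratic test problem
A = np.diag([1.0, 10.0])
x_star = simple_trust_region(lambda x: 0.5 * x @ A @ x, lambda x: A @ x,
                             np.array([4.0, -3.0]))
```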
Journal
Journal title: RAIRO - Operations Research
Year: 2022
ISSN: ['1290-3868', '0399-0559']
DOI: https://doi.org/10.1051/ro/2022107